24.3 Artificial Neural Networks


Fig. 24.3 An artificial neural network with three hidden layers (with cells represented by circles). On the left are the input cells (vertical shading) and on the right the output cells (horizontal shading)

number of cells. Each input cell should be connected to every cell in the second layer, and every cell in the second layer should be connected to every cell in the third layer. Finally, all cells in the third layer should be connected to all the cells constituting the output layer (in some cases a single cell), which gives the result. Figure 24.3 shows an example with three hidden layers. The architecture somewhat resembles that of the living neural network shown in Fig. 24.2.

The connexions are channels along which information flows. The “synaptic strengths” (or conductivities) of the individual connexions increase with the amount of traffic along them. This is directly inspired by Hebb’s rule for natural neural networks.

Each cell carries out a simple computation on the inputs it receives; for example, it could sum the received inputs, weighted by the synaptic strengths, and output 1 if the sum exceeds some threshold and 0 otherwise.
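A cell of this kind can be sketched in a few lines; the particular weights and threshold below are illustrative values, not anything prescribed here.

```python
def cell_output(inputs, weights, threshold):
    """Sum the inputs weighted by the synaptic strengths;
    output 1 if the sum exceeds the threshold, otherwise 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Two inputs with synaptic strengths 0.6 and 0.4 and a threshold of 0.5:
print(cell_output([1, 1], [0.6, 0.4], 0.5))  # prints 1 (sum 1.0 > 0.5)
print(cell_output([0, 1], [0.6, 0.4], 0.5))  # prints 0 (sum 0.4 <= 0.5)
```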

The network is trained (supervised learning) with inputs corresponding to known outputs, which fixes the synaptic strengths. It can then be used to solve practical problems. For example, one may have a set of attributes of coffee beans of unknown origin and wish to identify their provenance. Training would be carried out with beans of known origin, starting from random weights.
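This training loop can be illustrated with the classic perceptron learning rule applied to a single threshold cell: random initial weights, examples with known outputs, and weights adjusted after each mistake. The two “attributes” and their labels are invented for illustration.

```python
import random

def train(examples, n_inputs, epochs=500, rate=0.1):
    """Adjust weights on a single threshold cell until the
    examples with known outputs are classified correctly."""
    random.seed(0)
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    threshold = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            total = sum(x * w for x, w in zip(inputs, weights))
            output = 1 if total > threshold else 0
            error = target - output
            # Strengthen or weaken each connexion according to the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            threshold -= rate * error
    return weights, threshold

# Toy examples with two attributes: label 1 only when both are present.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = train(data, 2)
```

Because the toy data are linearly separable, this simple rule is guaranteed to converge; the back-propagation algorithm discussed next handles networks with hidden layers, which this rule cannot.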

The so-called back propagation (of errors) algorithm is commonly used to diminish the error between the outputs of the network during training and the known examples provided, by adjusting the weights.9

9 Rumelhart and McClelland (1986).
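A minimal sketch of back propagation for a network with one hidden layer, with the hard threshold above replaced by a smooth sigmoid so that errors can be differentiated and propagated backwards. The network size, learning rate, and toy attribute data are illustrative assumptions, not taken from the text.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w_hidden, w_out, inputs):
    x = inputs + [1.0]  # constant input standing in for the threshold (bias)
    hidden = [sigmoid(sum(a * b for a, b in zip(x, ws))) for ws in w_hidden]
    h = hidden + [1.0]
    output = sigmoid(sum(a * b for a, b in zip(h, w_out)))
    return x, h, output

def train(examples, n_hidden=3, epochs=2000, rate=0.5):
    random.seed(1)
    n_in = len(examples[0][0])
    w_hidden = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                for _ in range(n_hidden)]
    w_out = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for inputs, target in examples:
            x, h, output = forward(w_hidden, w_out, inputs)
            # Error at the output cell, propagated back to every weight.
            delta_out = (target - output) * output * (1 - output)
            for j in range(n_hidden):
                delta_h = delta_out * w_out[j] * h[j] * (1 - h[j])
                for i in range(n_in + 1):
                    w_hidden[j][i] += rate * delta_h * x[i]
            for j in range(n_hidden + 1):
                w_out[j] += rate * delta_out * h[j]
    return w_hidden, w_out

# Toy attribute vectors: label 1 when both attribute values are high.
data = [([0.2, 0.1], 0), ([0.1, 0.3], 0), ([0.9, 0.8], 1), ([0.7, 0.9], 1)]
w_hidden, w_out = train(data)
```

With the smooth sigmoid in place of the hard threshold, each weight can be nudged in the direction that reduces the output error, which is the essence of the algorithm.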